Influence Maximization (IM) is a classical combinatorial optimization problem with wide applications in mobile networks, social computing, and recommendation systems. It aims to select a small number of users so as to maximize the influence spread across an online social network. Because of its potential commercial and academic value, many researchers have studied the IM problem from different perspectives. The main challenge stems from the NP-hardness of the IM problem and the #P-hardness of estimating the influence spread; traditional algorithms for overcoming them can be categorized into two classes: heuristic algorithms and approximation algorithms. However, heuristic algorithms offer no theoretical guarantee, and the theoretical design of approximation algorithms is close to its limit, so it is almost impossible to further improve their performance. With the rapid development of artificial intelligence, technology based on Machine Learning (ML) has achieved remarkable results in many fields. In view of this, a number of new methods have emerged in recent years that solve combinatorial optimization problems with ML-based techniques. These methods offer fast solving speed and strong generalization to unseen graphs, providing a brand-new direction for solving combinatorial optimization problems. We therefore set aside traditional algorithms based on iterative search and review recent ML-based methods, especially Deep Reinforcement Learning, for solving the IM problem and its variants in social networks. We focus on summarizing the relevant background knowledge, basic principles, common methods, and applied research. Finally, we point out the challenges that urgently need to be addressed in future IM research.
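As context for the approximation-algorithm line of work mentioned above, the classical greedy strategy repeatedly adds the seed node with the largest estimated marginal gain, with the spread estimated by Monte Carlo simulation under the Independent Cascade model. The following is an illustrative baseline sketch, not any specific surveyed method; the function names and parameter defaults are our own.

```python
import random

def simulate_ic(graph, seeds, prob=0.1, rounds=200):
    """Monte Carlo estimate of the expected influence spread of `seeds`
    under the Independent Cascade (IC) model. `graph` maps each node to
    a list of its out-neighbors; each edge activates with probability `prob`."""
    rng = random.Random(0)  # fixed seed keeps the sketch deterministic
    total = 0
    for _ in range(rounds):
        active = set(seeds)
        frontier = list(seeds)
        while frontier:
            newly = []
            for u in frontier:
                for v in graph.get(u, ()):
                    if v not in active and rng.random() < prob:
                        active.add(v)
                        newly.append(v)
            frontier = newly
        total += len(active)
    return total / rounds

def greedy_im(graph, k, prob=0.1):
    """Greedy seed selection: in each round, add the node with the
    largest estimated marginal gain in influence spread."""
    seeds = []
    for _ in range(k):
        base = simulate_ic(graph, seeds, prob)
        best, best_gain = None, float("-inf")
        for v in graph:
            if v in seeds:
                continue
            gain = simulate_ic(graph, seeds + [v], prob) - base
            if gain > best_gain:
                best, best_gain = v, gain
        seeds.append(best)
    return seeds
```

In practice this naive loop is accelerated by exploiting the submodularity of the spread function (lazy evaluation, as in CELF), which the ML-based methods surveyed here aim to sidestep entirely.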
The key challenge in Attribute Value Extraction (AVE) for e-commerce websites is how to handle the large number of attributes across diverse products. Although this challenge has been addressed with a question answering (QA) approach that finds the value for a given query (attribute) in product data, it does not work effectively for rare and ambiguous queries. We therefore propose simple knowledge-driven query expansion based on the possible answers (values) of a query (attribute) for QA-based AVE. We retrieve the values of a query (attribute) from the training data to expand the query. We train a model with two tricks, knowledge dropout and knowledge token mixing, which mimic the imperfection of the value knowledge at test time. Experimental results on our cleaned version of the AliExpress dataset show that our method improves the performance of AVE (+6.08 macro F1), especially for rare and ambiguous attributes (+7.82 and +6.86 macro F1, respectively).
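A minimal sketch of the knowledge-driven query expansion idea described above: retrieve the values seen for an attribute in the training data, append them to the query, and randomly drop values during training to mimic knowledge dropout. This is our own illustrative rendering under those assumptions; all function names are hypothetical.

```python
import random

def build_value_knowledge(training_records):
    """Collect the set of values observed for each attribute in training data.
    `training_records` is an iterable of (attribute, value) pairs."""
    knowledge = {}
    for attribute, value in training_records:
        knowledge.setdefault(attribute, set()).add(value)
    return knowledge

def expand_query(attribute, knowledge, dropout=0.0, rng=None):
    """Expand an attribute query with its known values. `dropout` mimics
    knowledge dropout: each retrieved value may be removed during training
    so the model tolerates incomplete value knowledge at test time."""
    rng = rng or random.Random(0)
    values = sorted(knowledge.get(attribute, ()))
    kept = [v for v in values if rng.random() >= dropout]
    return attribute if not kept else f"{attribute} ({', '.join(kept)})"
```

The expanded string then replaces the bare attribute as the QA model's question; knowledge token mixing (mixing values from other attributes) would be a small variation on the same retrieval step.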
The Gaussian-smoothed optimal transport (GOT) framework, pioneered in Goldfeld et al. (2020) and followed by a series of subsequent papers, has quickly caught the attention of researchers in statistics, machine learning, information theory, and related fields. One key observation made therein is that, by adapting to the GOT framework instead of its unsmoothed counterpart, the curse of dimensionality in approximating the true data-generating distribution by the empirical measure can be lifted. The current paper shows that a related observation applies to the estimation of nonparametric mixing distributions in discrete exponential family models, where under the GOT cost the estimation accuracy of the nonparametric MLE can be accelerated to a polynomial rate. This is in sharp contrast to the classical sub-polynomial rates based on unsmoothed metrics, which cannot be improved from an information-theoretic perspective. A key step in our analysis is the establishment of a new Jackson-type approximation bound for Gaussian-convolved Lipschitz functions. This insight bridges existing techniques for analyzing nonparametric MLEs with the new framework.
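For concreteness, the Gaussian-smoothed optimal transport cost referred to above is usually written (in our notation; the abstract itself fixes none) as

```latex
W_1^{(\sigma)}(\mu,\nu) \;=\; W_1\bigl(\mu * \mathcal{N}_\sigma,\; \nu * \mathcal{N}_\sigma\bigr),
\qquad \mathcal{N}_\sigma = \mathcal{N}(0,\sigma^2 I_d),
```

where $*$ denotes convolution. The dimension-free observation of Goldfeld et al. (2020) is that $\mathbb{E}\,W_1^{(\sigma)}(\hat{\mu}_n,\mu) = O(n^{-1/2})$ for the empirical measure $\hat{\mu}_n$ in any fixed dimension $d$, in contrast to the classical unsmoothed rate of order $n^{-1/d}$.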
Topic evolution modeling has received significant attention in recent decades. Although various topic evolution models have been proposed, most studies focus on a single document corpus. In practice, however, we can easily access data from multiple sources and also observe relationships among them. It is then of interest to recognize the relationship between multiple text corpora and further exploit this relationship to improve topic modeling. In this work, we focus on a special relationship between two text corpora, which we define as the "lead-lag relationship". This relationship characterizes the phenomenon that one text corpus influences the topics discussed in the other text corpus in the future. To discover the lead-lag relationship, we propose a jointly dynamic topic model and develop an embedding extension to address the modeling problem for large-scale text corpora. With the recognized lead-lag relationship, the similarity between the two text corpora can be quantified and the quality of the topics learned in both corpora can be improved. We numerically investigate the performance of the jointly dynamic topic modeling approach using synthetic data. Finally, we apply the proposed model to two text corpora consisting of statistics papers and graduate theses. The results show that the proposed model can well recognize the lead-lag relationship between the two corpora and also discover specific and shared topic patterns in them.
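A lead-lag relationship between two topic-proportion time series can be illustrated with simple lagged cross-correlation: if shifting the follower series back by some lag maximizes its correlation with the leader, that lag is a candidate lead time. This is a toy diagnostic of our own, not the jointly dynamic topic model proposed in the abstract.

```python
def mean(xs):
    return sum(xs) / len(xs)

def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def best_lag(leader, follower, max_lag):
    """Return the lag (in time steps) at which the leader series best
    aligns with the follower series, via lagged cross-correlation."""
    best, best_corr = 0, float("-inf")
    for lag in range(max_lag + 1):
        n = len(follower) - lag
        c = correlation(leader[:n], follower[lag:lag + n])
        if c > best_corr:
            best, best_corr = lag, c
    return best
```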
The Bradley-Terry-Luce (BTL) model is a benchmark model for pairwise comparisons between individuals. Despite recent progress on the first-order asymptotics of several popular procedures, the understanding of uncertainty quantification in the BTL model remains substantially incomplete, especially when the underlying comparison graph is sparse. In this paper, we fill this gap by focusing on two estimators that have received much attention: the maximum likelihood estimator (MLE) and the spectral estimator. Using a unified proof strategy, we derive sharp and uniform non-asymptotic expansions for both estimators in the sparsest possible regime (up to some poly-logarithmic factors) of the underlying comparison graph. These expansions allow us to obtain: (i) finite-dimensional central limit theorems for both estimators; (ii) confidence intervals for individual ranks; (iii) the optimal constant of $\ell_2$ estimation, which is achieved by the MLE but not by the spectral estimator. Our proof is based on a self-consistent equation of the second-order remainder vector and a novel leave-two-out analysis.
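As a hedged illustration of the spectral estimator discussed above, a Rank Centrality-style construction builds a Markov chain whose transition from $i$ to $j$ is proportional to the fraction of comparisons that $j$ won over $i$; its stationary distribution is proportional to the BTL weights. The sketch below (our own simplification, with a uniform normalizer and population win fractions supplied directly) recovers the weights by power iteration.

```python
def spectral_btl(num_items, beat_frac, iters=5000):
    """Spectral (Rank Centrality-style) estimator for BTL weights.
    `beat_frac[(i, j)]` is the fraction of i-vs-j comparisons won by j.
    Returns the stationary distribution of the induced Markov chain,
    which is proportional to the BTL weights."""
    d = float(num_items)  # uniform normalizer keeps rows sub-stochastic
    pi = [1.0 / num_items] * num_items
    for _ in range(iters):
        new = [0.0] * num_items
        for i in range(num_items):
            out = 0.0
            for j in range(num_items):
                if i == j:
                    continue
                p = beat_frac.get((i, j), 0.0) / d
                new[j] += pi[i] * p
                out += p
            new[i] += pi[i] * (1.0 - out)  # self-loop mass
        pi = new
    return pi
```

With two items of true weights 2 and 1, the population win fractions are 1/3 and 2/3, and the stationary distribution converges to (2/3, 1/3), matching the normalized weights; the MLE would instead maximize the BTL log-likelihood directly.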
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple, yet its performance is impressive: CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR input is missing. Code will be released at https://github.com/junjie18/CMT.
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
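The abstract describes NAIVEATTACK as simply adding triggers to the raw data at the initial distillation phase. A minimal sketch of that kind of trigger stamping follows; the patch location, size, trigger value, and poisoning rate are our assumptions, and images are represented as nested lists of floats for self-containment.

```python
def add_trigger(image, trigger_value=1.0, size=2):
    """Stamp a small square patch in the bottom-right corner of an image
    (an H x W grid of floats in [0, 1]); a NAIVEATTACK-style trigger."""
    poisoned = [row[:] for row in image]  # copy; do not mutate the original
    h, w = len(poisoned), len(poisoned[0])
    for r in range(h - size, h):
        for c in range(w - size, w):
            poisoned[r][c] = trigger_value
    return poisoned

def poison_dataset(images, labels, target_label, rate=0.1):
    """Poison a fraction of the raw training data before distillation:
    triggered samples are relabeled to the attacker's target class."""
    n_poison = max(1, int(len(images) * rate))
    out_images, out_labels = [], []
    for idx, (img, lab) in enumerate(zip(images, labels)):
        if idx < n_poison:
            out_images.append(add_trigger(img))
            out_labels.append(target_label)
        else:
            out_images.append(img)
            out_labels.append(lab)
    return out_images, out_labels
```

DOORPING differs in that the trigger itself is optimized and re-injected at every distillation iteration rather than fixed up front, which is why it reaches higher ASR.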
Automatic music generation with artificial intelligence typically requires a large amount of data, which is hard to obtain for many less common genres and musical instruments. To tackle this issue, we present ongoing work and preliminary findings on the possibility for deep models to transfer knowledge from language to music, by fine-tuning large language models pre-trained on a massive text corpus on only hundreds of MIDI files of drum performances. We show that by doing so, one of the largest state-of-the-art models (GPT3) is capable of generating reasonable drum grooves, while models that are not pre-trained (Transformer) show no such ability beyond naive repetition. Evaluating generated music is a challenging task, and evaluating drum grooves, with little precedent in the literature, is even more so. Hence, we propose a tailored structural evaluation method and analyze drum grooves produced by GPT3 compared to those played by human professionals, exposing the strengths and weaknesses of such generation by language-to-music transfer. Our findings suggest that language-to-music transfer learning with large language models is viable and promising.
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes from only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully exploit the relationship between support and query features within a Transformer-like framework. Our key insights are twofold. First, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, at the feature level and at the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, performance on novel classes improves significantly over our strong baseline. Additionally, our new framework can easily be extended to incremental FSIS with minor modification. When benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shot counts, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method in the 10/30-shot settings. We further demonstrate the superiority of our approach on Few-Shot Object Detection. Code and models will be available.
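Mask-derived class centers of the kind mentioned above are commonly computed by masked average pooling: average the support feature vectors over foreground pixels, then use the center to re-weight query features by similarity. The sketch below illustrates that generic operation on nested-list "feature maps"; it is not RefT's exact module, and the dot-product re-weighting is our simplification.

```python
def masked_class_center(support_feats, support_mask):
    """Masked average pooling: average support feature vectors over the
    foreground pixels of the support mask to obtain a class center.
    `support_feats` is an H x W grid of feature vectors; `support_mask`
    is an H x W grid of 0/1 values."""
    dim = len(support_feats[0][0])
    center = [0.0] * dim
    count = 0
    for feat_row, mask_row in zip(support_feats, support_mask):
        for feat, m in zip(feat_row, mask_row):
            if m:
                count += 1
                for k in range(dim):
                    center[k] += feat[k]
    if count == 0:
        return center
    return [c / count for c in center]

def reweight_query(query_feats, center):
    """Re-weight each query feature vector by its dot-product similarity
    to the class center (a feature-level reference step)."""
    out = []
    for row in query_feats:
        new_row = []
        for feat in row:
            sim = sum(f * c for f, c in zip(feat, center))
            new_row.append([f * sim for f in feat])
        out.append(new_row)
    return out
```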
Graph Neural Networks (GNNs) have shown satisfying performance on various graph learning tasks. To achieve better fitting capability, most GNNs have a large number of parameters, which makes them computationally expensive. Therefore, it is difficult to deploy them onto edge devices with scarce computational resources, e.g., mobile phones and wearable smart devices. Knowledge Distillation (KD) is a common solution for compressing GNNs, in which a lightweight model (i.e., the student model) is encouraged to mimic the behavior of a computationally expensive GNN (i.e., the teacher GNN model). Nevertheless, most existing GNN-based KD methods lack fairness considerations. As a consequence, the student model usually inherits and even exaggerates the bias of the teacher GNN. To handle this problem, we take initial steps towards fair knowledge distillation for GNNs. Specifically, we first formulate a novel problem of fair knowledge distillation for GNN-based teacher-student frameworks. We then propose a principled framework named RELIANT to mitigate the bias exhibited by the student model. Notably, the design of RELIANT is decoupled from any specific teacher and student model structures, and can thus be easily adapted to various GNN-based KD frameworks. We perform extensive experiments on multiple real-world datasets, which corroborate that RELIANT achieves less biased GNN knowledge distillation while maintaining high prediction utility.
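RELIANT's actual objective is not given in the abstract; the following is only a generic sketch of how a distillation loss (student mimicking teacher via KL divergence) can be combined with a fairness penalty (here, a demographic parity gap over a binary sensitive attribute). All names, the specific penalty, and the weighting are our assumptions.

```python
import math

def kl_div(p, q, eps=1e-12):
    """KL divergence KL(p || q) between two discrete distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def demographic_parity_gap(scores, groups):
    """Absolute difference in mean positive-class score between the two
    sensitive groups (0 and 1): a simple statistical-parity measure."""
    g0 = [s for s, g in zip(scores, groups) if g == 0]
    g1 = [s for s, g in zip(scores, groups) if g == 1]
    return abs(sum(g0) / len(g0) - sum(g1) / len(g1))

def fair_kd_loss(student_probs, teacher_probs, pos_scores, groups, lam=1.0):
    """Distillation term (student matches teacher soft labels) plus a
    fairness penalty discouraging the student from inheriting bias."""
    kd = sum(kl_div(t, s) for s, t in zip(student_probs, teacher_probs))
    kd /= len(student_probs)
    return kd + lam * demographic_parity_gap(pos_scores, groups)
```

Minimizing such a combined objective trades prediction utility against bias via `lam`, which mirrors the utility/fairness trade-off the abstract reports.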